15 research outputs found

    Autonomous Mobile Vehicle based on RFID Technology using an ARM7 Microcontroller

    Radio Frequency Identification (RFID) is regarded as one of the ten most important technologies of the 20th century, and industrial automation is one of the key areas driving its development. This paper therefore designs and implements an RFID-based autonomous mobile vehicle to broaden the application of RFID systems. An LPC2148 microcontroller controls the vehicle and communicates with the RFID reader. Movement commands such as turn right, turn left, speed up, and speed down are stored in RFID tags beforehand, and the tags are affixed along the track; the vehicle reads these commands from the tags as it passes over them and carries out the corresponding actions. Owing to the convenience and contactless nature of RFID systems, the proposed vehicle has strong potential for industrial automation, goods transportation, data transmission, unmanned medical nursing, and similar applications. Experimental results demonstrate the validity of the proposed vehicle.
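
    As a point of reference, the tag-driven control scheme described above reduces to a simple read-and-dispatch loop. The sketch below is written in Python rather than the firmware C that would run on the LPC2148, and the tag payloads, command names, and vehicle interface are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a tag-driven control loop. Command names and the
# simulated tag stream are hypothetical; the real system reads payloads
# through an RFID reader attached to an LPC2148 microcontroller.

# Hypothetical mapping from tag payloads to vehicle actions.
COMMANDS = {
    "TURN_LEFT":  lambda v: v.set_heading(v.heading - 90),
    "TURN_RIGHT": lambda v: v.set_heading(v.heading + 90),
    "SPEED_UP":   lambda v: v.set_speed(v.speed + 1),
    "SPEED_DOWN": lambda v: v.set_speed(max(0, v.speed - 1)),
}

class Vehicle:
    def __init__(self):
        self.heading = 0   # degrees
        self.speed = 0     # arbitrary units

    def set_heading(self, heading):
        self.heading = heading % 360

    def set_speed(self, speed):
        self.speed = speed

def run(vehicle, tag_stream):
    """Apply each tag's stored command as the vehicle passes over it."""
    for payload in tag_stream:
        action = COMMANDS.get(payload)
        if action is not None:
            action(vehicle)
        # Unknown payloads are ignored, as a tolerant controller might do.

if __name__ == "__main__":
    v = Vehicle()
    run(v, ["SPEED_UP", "TURN_RIGHT", "SPEED_UP", "TURN_LEFT", "SPEED_DOWN"])
    print(f"heading={v.heading} deg, speed={v.speed}")
```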

    Reconstruction of the pose of uncalibrated cameras via user-generated videos

    Extracting 3D geometry from hand-held, unsteady, uncalibrated cameras faces multiple difficulties: finding usable frames, feature matching, and unknown, variable focal length, to name three. We have built a prototype system that allows a user to spatially navigate playback viewpoints of an event of interest, using geometry automatically recovered from casually captured videos. The system, whose workings we present in this paper, necessarily estimates not only scene geometry but also relative viewpoint positions, overcoming the above difficulties in the process. The only inputs required are video sequences of a common scene from various viewpoints, as are readily available online from sporting and music events. Our methods make no assumption about the synchronization of the input and do not require file metadata, instead exploiting the video itself to self-calibrate. The footage need only contain some camera rotation with little translation, which is likely for hand-held event footage. This is the author accepted manuscript; the final version is available from IEEE via http://dx.doi.org/10.1145/2659021.265902
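
    For context, the core two-view step implied by the abstract, estimating the relative pose between two cameras from feature matches, can be sketched with OpenCV as follows. The sketch assumes known intrinsics (focal length and principal point), whereas the paper's system self-calibrates them from the footage; frame selection and cross-video synchronization are likewise omitted.

```python
# Two-view relative pose from feature matches (OpenCV sketch).
# Intrinsics (focal, pp) are assumed here; the paper instead self-calibrates.
import cv2
import numpy as np

def relative_pose(img1, img2, focal, pp):
    """Estimate rotation R and translation direction t between two frames."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching with cross-checking for ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)

    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, then decomposition into R and t.
    E, mask = cv2.findEssentialMat(pts1, pts2, focal=focal, pp=pp,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, focal=focal, pp=pp, mask=mask)
    return R, t
```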

    Effective-medium theory of a heterogeneous medium with individual grains having a nonlocal dielectric function

    A formulation is given to obtain the effective dielectric function ε(ω) of a heterogeneous medium in which nonlocality due to the spatial dispersion of the individual grains is important. The formulation is then applied to the calculation of ε(ω) for a medium consisting of spherical metallic grains. A very general method of solving boundary-value problems involving a nonlocal ε is presented. The results are presented in both the average-T-matrix approximation and the coherent-potential approximation. Numerical results are obtained with a hydrodynamic model of the metallic dielectric function, although in principle other nonlocal dielectric functions could also be used. Finally, the optical absorption by dye molecules adsorbed on metallic spheres is calculated, and the results are compared with those obtained using a local dielectric function for the metal. The important effects of nonlocality manifest themselves in characteristic shifts of the resonances and in a decrease in the peak heights.
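
    For reference, the hydrodynamic model mentioned above is commonly written with a longitudinal dielectric function of the following form; the paper's exact convention (damping term, bound-electron background) may differ:

```latex
% Hydrodynamic (nonlocal) metallic dielectric function, in a common form.
\varepsilon_L(k,\omega) = 1 - \frac{\omega_p^{2}}{\omega(\omega + i\gamma) - \beta^{2} k^{2}},
\qquad \beta^{2} \simeq \tfrac{3}{5}\, v_F^{2}
```

    Here ω_p is the plasma frequency, γ is a phenomenological damping rate, and v_F is the Fermi velocity; the explicit dependence on the wave vector k is what makes the response nonlocal, and taking β → 0 recovers the local Drude form.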

    Growth of vertical MoS₂

    Executive Summary
